Practical Activities on PUFs and TRNGs

Sergio Vinagrero

TIMA Laboratory

2025-10-29

Workshop Goals

Use SRAM cells as sources of entropy

Extract startup values and study their behaviour

Use these values to create a PUF and a TRNG


📋 Microcontrollers + Tools provided in the repository

Who am I?

PhD in Grenoble, France

Methodologies for the Design, the Modeling, and the Quality Assessment of Physical Unclonable Functions (PUFs)


PostDoc at NEUROPULS project in Lyon and Grenoble, France

Development of authentication protocols resistant to Machine Learning attacks, based on photonic Physical Unclonable Functions

Programme

14:00-15:45

Confirm environment setup

Save multiple readouts per device

Analyze reliability per bit


15:45-16:15

Coffee break


16:15-18:00

Compute PUF metrics

Perform TRNG test suite

Wrap up

SRAM as Entropy Source

Each SRAM cell is made of 2 cross-coupled inverters

Due to process variability, the inverters have different strength

On power-up, each cell randomly settles to 0 or 1

It’s this random distribution of values that we are going to study and exploit
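As a toy illustration of this behaviour (not the real device physics), each cell can be modeled with a fixed probability of settling to 1, drawn once per cell from a bimodal distribution: most cells are strongly skewed by process variability, while a few are close to 0.5 and flip between power-ups.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells = 1024

# Process variability: each cell gets a fixed probability of powering up to 1.
# A Beta(0.15, 0.15) is bimodal: most cells are skewed, a few are near 0.5.
bias = rng.beta(0.15, 0.15, size=n_cells)

# Two independent power-ups of the same "chip"
readout_a = (rng.random(n_cells) < bias).astype(np.int8)
readout_b = (rng.random(n_cells) < bias).astype(np.int8)

# Skewed cells repeat their value; near-0.5 cells flip between power-ups
agreement = (readout_a == readout_b).mean()
```

The skewed cells are what makes the startup pattern device-specific (useful for a PUF), while the flipping cells provide the noise a TRNG needs.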

Device Under Test

During this workshop, we will be using the STM32L152RC microcontroller


256 Kbytes of flash memory

32 Kbytes of RAM = 262144 bits

8 Kbytes of data EEPROM

Can be controlled easily through ST-LINK/V2 debugger/programmer

STM32 Discovery

SRAM as PUF

PUF Metrics

Uniformity(d) \(\small = \frac{1}{L} \sum_{l=1}^L R_{d,l}\)


Bit-aliasing(c) \(\small = \frac{1}{D} \sum_{d=1}^D R_{d,c}\)


Reliability \(\small = 1 - \frac{1}{S}\sum_{s=1}^S \tfrac{HD(R_{d,ref}, R_{d,s})}{L}\)


Uniqueness \(\small = \frac{2}{D(D-1)}\sum_{i=1}^{D-1}\sum_{j=i+1}^{D} \tfrac{HD(R_i, R_j)}{L}\)

The Hamming Distance between two vectors of equal length is the number of positions at which the corresponding values are different.
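For two equal-length binary numpy arrays, this is simply the count of differing positions:

```python
import numpy as np

a = np.array([0, 1, 1, 0, 1], dtype=np.int8)
b = np.array([0, 0, 1, 1, 1], dtype=np.int8)

# Positions 1 and 3 differ, so the Hamming distance is 2
hd = np.count_nonzero(a != b)

# The fractional Hamming distance HD/L used in the metrics above
fhd = hd / a.size
```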

Reliability per bit

Measure how reliable each bit is across multiple measurements

\[ Reliability = 1 - \frac{1}{S}\sum_{s=1}^S XOR(R_{l,ref}, R_{l,s}) \]

Where \(S\) is the number of readouts, \(R_{l,ref}\) is the \(l\)-th reference bit and \(R_{l,s}\) is the \(l\)-th bit from readout \(s\).

You will probably obtain a Reliability per bit distribution similar to this one


This level of granularity allows us to select bits valid for PUF or TRNG

Example Metrics Setup

For this example, the CRPs are generated randomly.

You will need to load the readouts you obtained into a similar format.

import numpy as np
from itertools import combinations

# 8 devices with 32 bits each and 10 samples plus the reference response
C, D, S = 32, 8, 11

Devices = np.arange(D)     # The device ids
Challenges = np.arange(C)  # The challenges
Samples = np.arange(1, S)  # The samples. The Sample 0 is the reference

# We are going to generate random CRPs
# Sample 0 is the reference; the remaining samples are
# derived from it below by flipping bits at a fixed error rate
crps = np.random.randint(0, 2, (C, D, S), dtype=np.int8)
ref = crps[:,:,0]

err_rate = 0.2 # We set some bit error-rate
for s in Samples:
    flips = (np.random.rand(C, D) <= err_rate).astype(np.int8)
    crps[:, :, s] = np.bitwise_xor(ref, flips)
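To use your measured data instead of random CRPs, the raw readouts must be packed into the same `(C, D, S)` layout. A minimal sketch, assuming each readout is a raw binary dump stored as `data/<device>/<sample>.bin` (the path layout and file format are assumptions; adapt them to however you saved your readouts):

```python
import numpy as np
from pathlib import Path

def load_readouts(data_dir, devices, samples):
    """Stack raw binary readouts into a (bits, devices, samples) int8 array."""
    per_device = []
    for dev in devices:
        per_sample = []
        for s in samples:
            # Each file is a raw byte dump of the SRAM; unpack it to bits
            raw = np.fromfile(Path(data_dir) / dev / f"{s}.bin", dtype=np.uint8)
            per_sample.append(np.unpackbits(raw).astype(np.int8))
        per_device.append(np.stack(per_sample, axis=-1))
    return np.stack(per_device, axis=1)
```

For the Discovery boards, a full 32 kB readout unpacks to 262144 bits per device and sample.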

Example Metrics Computation

# Uniformity is the mean response per device
uniformity = np.mean(ref, axis=0)

# Bit-aliasing is the mean response per challenge
bit_aliasing = np.mean(ref, axis=1)

# The number of device pairs, without repetition
num_pairs = D*(D-1) // 2
uniqueness = np.zeros(num_pairs, dtype=float)

for p, (f, s) in enumerate(combinations(Devices, 2)):
    uniqueness[p] = np.bitwise_xor(ref[:, f], ref[:, s]).mean()

# A bit is reliable when it matches the reference value
matches = [ref == crps[:, :, s] for s in Samples]
reliability = np.dstack(matches).mean(axis=2)

# Compute the reliability per bit and check against the expected distribution
reliability_per_bit = reliability.mean(axis=1)

SRAM as TRNG

By analyzing the readouts, we have seen the distribution of reliability per bit.

We (should) have seen that the distribution of reliability per bit is asymmetric, and that a few bits are very unreliable.

Instead of ignoring these bits, we will aggregate them into a single bitstream that we will treat as the output of a True Random Number Generator (TRNG)

We are interested in unreliable bits. The threshold used here is arbitrary
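A minimal sketch of this selection, assuming a `reliability_per_bit` array like the one computed earlier and an arbitrary threshold of 0.6 (both the threshold and the random placeholder data below are assumptions to be replaced by your measurements):

```python
import numpy as np

rng = np.random.default_rng(1)

# Per-bit reliability, e.g. from the computation above (random here for the demo)
reliability_per_bit = rng.random(1024)

# Arbitrary threshold: bits that flip often are TRNG candidates
threshold = 0.6
trng_mask = reliability_per_bit < threshold

# Placeholder readouts (bits x samples); replace with your measured data
readouts = rng.integers(0, 2, size=(1024, 10), dtype=np.int8)

# Concatenate the unreliable bits from every readout into one bitstream
bitstream = readouts[trng_mask, :].ravel(order="F")
```

The resulting `bitstream` is what gets fed to the randomness test suites in the next section.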

TRNG Evaluation

There are plenty of statistical tests for randomness

  • Dieharder, NIST SP 800-22, NIST SP 800-90B, PractRand, TestU01, AIS, …

Most statistical tests provide a p-value that can be used to determine if the sequence passes the test


It’s important to state that even if all tests pass, this is not definitive proof of randomness
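As an illustration of how such a p-value arises, the Frequency (Monobit) Test from NIST SP 800-22 reduces to a single p-value computed from the bit count:

```python
import numpy as np
from math import erfc, sqrt

def monobit_pvalue(bits):
    """NIST SP 800-22 Frequency (Monobit) Test p-value."""
    # Map {0, 1} to {-1, +1} and sum the sequence
    s = np.sum(2 * np.asarray(bits, dtype=np.int64) - 1)
    s_obs = abs(s) / sqrt(len(bits))
    return erfc(s_obs / sqrt(2))

bits = np.random.randint(0, 2, 10**6)
p = monobit_pvalue(bits)
# The sequence passes at significance level 0.01 if p >= 0.01
```

A perfectly balanced sequence yields p = 1, while a heavily biased one yields a p-value close to 0.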

NIST SP 800-22 test suite

  1. The Frequency (Monobit) Test
  2. Frequency Test within a Block
  3. The Runs Test
  4. Tests for the Longest-Run-of-Ones in a Block
  5. The Binary Matrix Rank Test
  6. The Discrete Fourier Transform (Spectral) Test
  7. The Non-overlapping Template Matching Test
  8. The Overlapping Template Matching Test
  9. Maurer’s “Universal Statistical” Test
  10. The Linear Complexity Test
  11. The Serial Test
  12. The Approximate Entropy Test
  13. The Cumulative Sums (Cusums) Test
  14. The Random Excursions Test
  15. The Random Excursions Variant Test

Shannon Entropy

Entropy measures the expected amount of information needed to describe the state of the variable. In this context, it is the number of bits needed to encode a bitstream.

\[H(X) = -\sum_{x\in X} p(x) \log_2 p(x)\]

For binary variables \(X\in \{0, 1\}\), where \(p = \mathbb{P}[X=1]\)

\[H(p) = -\left[p\log_2(p) + (1-p)\log_2(1-p)\right]\]

Example on Shannon Entropy

While there are many estimators of entropy, we stick to using the “Naive” Shannon Entropy estimator.

The entropy of the bitstream should be very close to 1.

import numpy as np

bitstream = np.array(np.random.random(10**7) >= 0.5, dtype=np.int8)
p_one = bitstream.mean()

def H(p):
    if p == 0.0 or p == 1.0:
        return 0.0
    return -(p*np.log2(p) + (1-p)*np.log2(1-p))

entropy = H(p_one)

Entropy vs Reliability

Reliability and Entropy share an inverse relationship:

  • Responses that are very reliable are usually biased
  • Conversely, unbiased responses tend to be unreliable

Study the relationship between Entropy and Reliability

We can classify responses based on their Entropy and Reliability
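A hedged sketch of this classification, using per-bit bias to estimate entropy and a toy link between bias and reliability (the thresholds, the random placeholder data, and the bias-reliability link are all assumptions; use your measured statistics instead):

```python
import numpy as np

def binary_entropy(p):
    """Shannon entropy of a Bernoulli(p) bit, in bits."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

rng = np.random.default_rng(2)

# Replace with your measured per-bit statistics
bias = rng.random(1024)  # P[bit == 1] across readouts

# Toy link: a bit matches its majority reference with probability max(p, 1-p)
reliability = np.maximum(bias, 1 - bias)

entropy = binary_entropy(bias)

# Arbitrary thresholds: tune them to your own distributions
puf_bits = reliability > 0.95   # stable -> PUF key candidates
trng_bits = entropy > 0.9       # unbiased/unstable -> TRNG candidates
```

Note that the two sets are disjoint by construction: a bit cannot be both highly reliable and near-maximal entropy, which is exactly the trade-off described above.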

Workshop Summary

Power cycle the device and obtain SRAM startup data (10 readouts minimum)

Upload the data to the repository for other groups

Study the data and compute the metrics

Analyze Reliability per bit

For the PUF

Select reliable bits for a PUF key

Build PUF key

For the TRNG

Extract unstable bits for a TRNG

Perform Randomness test suite

Practical Notes

Tools in the repository

The repository contains the following resources:

├── data                       # Directory to store readouts
├── docs                       # Presentation resources
├── 00-install_st_openocd.sh   # Script to install openocd
├── 01-collect_readouts.sh     # Script to mass erase and obtain a readout
├── 02-get_nist.sh             # Script to download the NIST suite
├── openocd_stm32.py           # Python wrapper around openocd
├── stm31l152re.template.cfg   # Template configuration for openocd
└── README.md


You can take a look at the README.md file for more instructions on how to install the necessary tools and how to use them.

Setting up the environment

There are a series of scripts that download the necessary tools.

git clone https://github.com/servinagrero/workshop_turin.git
cd workshop_turin

You can take a look and modify the scripts 00-install_st_openocd.sh and 02-get_nist.sh to download and build the openocd and NIST test suite.

bash 00-install_st_openocd.sh # To install openocd and configurations
bash 02-get_nist.sh           # To download and compile the NIST suite

The openocd tool should also be available from the repositories of your distribution.

Obtaining readouts

Take a look at the 01-collect_readouts.sh script to see how to use the openocd_stm32.py wrapper

To mass erase the flash (Should be done at least once)

python3 openocd_stm32.py --openocd-scripts "/path/to/openocd/tcl" \
    --interface "interface/stlink.cfg" --target="board/stm32ldiscovery.cfg" \
    flash --erase

To obtain a single readout, make sure that --target points to the correct configuration for the device used and that the correct memory size is passed to --size. For the Discovery boards it should be 32 kB (0x8000).

python3 openocd_stm32.py --openocd-scripts "/path/to/openocd/tcl" \
    --interface "interface/stlink.cfg" --target="board/stm32ldiscovery.cfg" \
    read --address 0x20000000 --size 0x8000 --dir "${READOUTS_DIR}"

The script 01-collect_readouts.sh will perform both operations automatically. Just make sure to modify the READOUTS_DIR variable to point to a subdirectory in the data directory to store the readouts.

Running NIST test suite

Take a look at the 02-get_nist.sh script to see how to run the test suite.

This test suite is known to be very finicky

The parameter passed to assess is the length of each bitstream. You should run a minimum of 10 bitstreams of 1000 bits each, but it’s better to run it on more data.

# If you don't have enough data, the program will segfault
./assess 10000 < config.txt

# The final results are written to a text file
cat experiments/AlgorithmTesting/finalAnalysisReport.txt